Remote sensing of the Earth's surface water is critical to a wide range of environmental studies, from evaluating the societal impacts of seasonal droughts and floods to the large-scale implications of climate change. Consequently, a large literature exists on classifying water from satellite imagery. Yet previous methods have been limited by 1) the spatial resolution of public satellite imagery, 2) classification schemes that operate only at the pixel level, and 3) the need for multiple spectral bands. We advance the state of the art by 1) using commercial imagery with panchromatic and multispectral resolutions of 30 cm and 1.2 m, respectively, 2) developing multiple fully convolutional neural networks (FCNs) that learn the morphological features of water bodies in addition to their spectral properties, and 3) training FCNs that can classify water even from panchromatic imagery alone. This study focuses on rivers in the Arctic, using images from the QuickBird, WorldView, and GeoEye satellites. Because no training data are available at such high resolutions, we construct them manually. First, we use the RGB and NIR bands of the 8-band multispectral sensors. The trained models all achieve precision and recall above 90% on validation data, aided by on-the-fly preprocessing of the training data tailored to satellite imagery. In a novel approach, we then use the results of the multispectral model to generate training data for FCNs that require only panchromatic imagery, of which considerably more is available. Despite the smaller feature space, these models still achieve precision and recall above 85%. We release our open-source code and trained model parameters to the remote sensing community, paving the way for a wide range of environmental hydrology applications at vastly superior accuracies and two orders of magnitude higher spatial resolution than previously possible.
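As a rough illustration of the segmentation setup described above, the sketch below builds a tiny fully convolutional network in PyTorch that maps a single-band (panchromatic) tile to a per-pixel water probability. The layer sizes and tile size are illustrative assumptions, not the paper's architecture.

```python
# A minimal FCN sketch (PyTorch) for binary water segmentation on
# single-band (panchromatic) imagery. Layer widths are illustrative.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_channels: int = 1):  # 1 band panchromatic, 4 for RGB+NIR
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # per-pixel water logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyFCN(in_channels=1)
tile = torch.randn(1, 1, 256, 256)       # one 256x256 panchromatic tile
water_prob = torch.sigmoid(model(tile))  # per-pixel probability of water
print(water_prob.shape)                  # torch.Size([1, 1, 256, 256])
```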
Embedding words in vector space is a fundamental first step in state-of-the-art natural language processing (NLP). Typical NLP solutions employ pre-defined vector representations to improve generalization by co-locating similar words in vector space. For instance, Word2Vec is a self-supervised predictive model that captures the context of words using a neural network. Similarly, GloVe is a popular unsupervised model that incorporates corpus-wide word co-occurrence statistics. Such word embeddings have significantly boosted important NLP tasks, including sentiment analysis, document classification, and machine translation. However, the embeddings are dense floating-point vectors, making them expensive to compute and difficult to interpret. In this paper, we instead propose to represent the semantics of words with a few defining words related through propositional logic. To produce such logical embeddings, we introduce a Tsetlin Machine-based autoencoder that learns logical clauses in a self-supervised manner. The clauses consist of contextual words like "black," "cup," and "hot" that define other words like "coffee," and are thus human-understandable. We evaluate our embedding approach on several intrinsic and extrinsic benchmarks, outperforming GloVe on six classification tasks. Furthermore, we investigate the interpretability of our embedding using the logical representations acquired during training. We also visualize word clusters in vector space, demonstrating how our logical embedding co-locates similar words.
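To make the idea of logical embeddings concrete, here is a toy sketch of how a word might be defined by conjunctive clauses over contextual words. The clauses are invented for illustration; in the paper they are learned by the Tsetlin Machine autoencoder.

```python
# A toy sketch of the idea behind logical word embeddings: a word is
# described by propositional clauses over contextual words. These
# clauses are invented examples, not learned ones.
def clause(context: set, literals: set) -> bool:
    """A conjunctive clause is True when all its literals occur in the context."""
    return literals.issubset(context)

# Hypothetical clauses defining "coffee".
coffee_clauses = [{"black", "cup"}, {"hot", "drink"}]

context = {"a", "hot", "drink", "in", "the", "morning"}
matches = sum(clause(context, c) for c in coffee_clauses)
print(f"'coffee' clauses matched: {matches}")  # 1 -> context supports "coffee"
```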
Plastic shopping bags that blow off roadsides and tangle on cotton plants can end up at cotton gins if not removed before harvest. Such bags may not only cause problems in the ginning process but may also become embedded in the cotton fibers, reducing their quality and marketable value. The bags must therefore be detected, located, and removed before the cotton is harvested. Manually detecting and locating these bags in cotton fields is labor-intensive, time-consuming, and costly. To address these challenges, we present the application of four variants of YOLOv5 (YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x) for detecting plastic shopping bags in RGB (red, green, and blue) images acquired by Unmanned Aircraft Systems (UAS). We also report fixed-effect model tests of the effects of bag color and YOLOv5 variant on average precision (AP), mean average precision (mAP@50), and accuracy, and we demonstrate the effect of the height of plastic bags on detection accuracy. Bag color had a significant effect (p < 0.001) on accuracy across all four variants, but no significant effect on AP for YOLOv5m (p = 0.10) and YOLOv5x (p = 0.35) at the 95% confidence level. Similarly, the YOLOv5 variant had no significant effect on the AP (p = 0.11) or accuracy (p = 0.73) of white bags, but it had significant effects on the AP (p = 0.03) and accuracy (p = 0.02) of brown bags, as well as on mAP@50 (p = 0.01) and inference speed (p < 0.0001). Additionally, the height of the plastic bags had a significant effect (p < 0.0001) on overall detection accuracy. These findings can help speed up the removal of plastic bags from cotton fields before harvest, thereby reducing the amount of contaminants that end up at cotton gins.
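For readers wanting to reproduce the detection step, a hedged sketch of running a YOLOv5 variant via the public ultralytics/yolov5 torch.hub API follows. The COCO-pretrained weights are a stand-in; detecting plastic bags would require fine-tuning on labeled UAS imagery (the 'bags.pt' checkpoint in the comment is hypothetical), and the image path is a placeholder.

```python
# A hedged sketch of running a YOLOv5 variant on a UAS image with the
# public ultralytics/yolov5 hub API. COCO weights are only a stand-in.
import torch

# model = torch.hub.load('ultralytics/yolov5', 'custom', path='bags.pt')  # hypothetical fine-tuned weights
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

results = model('uas_field_image.jpg')   # placeholder path; also accepts URL, PIL image, or ndarray
results.print()                          # classes, confidences, speed
boxes = results.pandas().xyxy[0]         # one row per detection
print(boxes[['xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'name']])
```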
Human civilization has an increasingly powerful influence on the Earth system. Driven by climate change and land-use change, natural disasters such as flooding have been increasing in recent years. Earth observations are an invaluable source for assessing and mitigating their negative impacts, and detecting changes from Earth observation data is one way to monitor the possible impact. Effective and reliable Change Detection (CD) methods can help identify the risk of disaster events at an early stage. In this work, we propose a novel unsupervised CD method for time series Synthetic Aperture Radar (SAR) data. The proposed method is a probabilistic model trained with unsupervised learning techniques, namely reconstruction and contrastive learning. The change map is generated from the difference between the distributions of pre-incident and post-incident data. We evaluate the proposed CD model on flood detection data, verifying its efficacy on 8 different flood sites, including three recent flood events from the Copernicus Emergency Management Service and six from the Sen1Floods11 dataset. Our model achieves an average Intersection over Union (IoU) of 64.53% and an F1 score of 75.43%, approximately 6-27% higher IoU and 7-22% higher F1 than the compared existing unsupervised and supervised CD methods. The results and the extensive discussion presented in the study demonstrate the effectiveness of the proposed unsupervised CD method.
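A minimal sketch of the core idea follows, with per-pixel Gaussian fits and a symmetric KL score standing in for the paper's learned probabilistic model; the threshold and array shapes are illustrative.

```python
# Flag change where the per-pixel distribution of post-incident SAR
# backscatter diverges from the pre-incident one. Gaussian fits and a
# symmetric KL score are stand-ins for the paper's learned model.
import numpy as np

def gaussian_kl(mu1, var1, mu2, var2):
    """KL(N1 || N2) per pixel, with a small epsilon for stability."""
    eps = 1e-6
    var1, var2 = var1 + eps, var2 + eps
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

pre = np.random.rand(10, 128, 128)    # 10 pre-incident SAR acquisitions (stand-in)
post = np.random.rand(4, 128, 128)    # 4 post-incident acquisitions (stand-in)

score = gaussian_kl(pre.mean(0), pre.var(0), post.mean(0), post.var(0)) \
      + gaussian_kl(post.mean(0), post.var(0), pre.mean(0), pre.var(0))
change_map = score > np.percentile(score, 95)   # illustrative threshold: top 5%
print(change_map.mean())                        # fraction of pixels flagged
```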
Human activity recognition (HAR) using drone-mounted cameras has attracted considerable interest from the computer vision research community in recent years. A robust and efficient HAR system plays a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes the task challenging are complex poses, differing viewpoints, and the environmental scenarios in which the action takes place. To address these complexities, we propose a novel Sparse Weighted Temporal Attention (SWTA) module that uses sparsely sampled video frames to obtain global weighted temporal attention. The proposed SWTA comprises two parts: first, a temporal segment network that sparsely samples a given set of frames; second, weighted temporal attention, which fuses attention maps derived from optical flow with raw RGB images. This is followed by a basenet comprising a convolutional neural network (CNN) module and fully connected layers that perform the activity recognition. The SWTA module can be plugged into existing deep CNN architectures to let them learn temporal information, eliminating the need for a separate temporal stream. Evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action, the proposed model achieves accuracies of 72.76%, 92.56%, and 78.86%, respectively, surpassing the previous state-of-the-art performances by margins of 25.26%, 18.56%, and 2.94%.
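The following schematic PyTorch sketch shows the two SWTA ingredients under assumed shapes: sparse sampling of one frame per temporal segment, and fusing optical-flow-derived attention maps with the sampled RGB frames. The fusion rule and dimensions are illustrative, not the paper's exact design.

```python
# Schematic sketch of (1) sparse temporal sampling and (2) weighting
# RGB frames by optical-flow attention. Shapes are assumptions.
import torch

def sparse_sample(video: torch.Tensor, num_segments: int = 4) -> torch.Tensor:
    """Pick one frame per equal temporal segment (video: T x C x H x W)."""
    T = video.shape[0]
    idx = torch.linspace(0, T - 1, num_segments).long()
    return video[idx]

video = torch.randn(64, 3, 112, 112)           # 64 RGB frames (stand-in clip)
frames = sparse_sample(video)                  # 4 sampled frames

flow_attn = torch.rand(4, 1, 112, 112)         # attention maps from optical flow
weights = torch.softmax(torch.rand(4), dim=0)  # stand-in learned per-frame weights
fused = (weights.view(4, 1, 1, 1) * flow_attn * frames).sum(dim=0)
print(fused.shape)  # torch.Size([3, 112, 112]) -> input to the basenet CNN
```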
Drone-camera-based human activity recognition (HAR) has received significant attention from the computer vision research community in the past few years. A robust and efficient HAR system plays a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes the task challenging are complex poses, differing viewpoints, and the environmental scenarios in which the action takes place. To address these complexities, we propose a novel Sparse Weighted Temporal Fusion (SWTF) module that uses sparsely sampled video frames to obtain a globally weighted temporal fusion outcome. The proposed SWTF has two components: first, a temporal segment network that sparsely samples a given set of frames; second, weighted temporal fusion, which fuses feature maps derived from optical flow with raw RGB images. This is followed by a base network comprising a convolutional neural network module and fully connected layers that perform the activity recognition. The SWTF module can be plugged into existing deep CNN architectures to let them learn temporal information, eliminating the need for a separate temporal stream. Evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action, the proposed model achieves accuracies of 72.76%, 92.56%, and 78.86%, respectively, surpassing the previous state-of-the-art performances by a significant margin.
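Relative to the SWTA sketch above, SWTF fuses feature maps rather than attention maps; a brief sketch of that variant under the same assumptions follows (the fusion rule is again illustrative).

```python
# SWTF-style weighted fusion of optical-flow and RGB *feature maps*.
import torch

rgb_feat = torch.randn(4, 64, 28, 28)    # CNN features of 4 sampled RGB frames
flow_feat = torch.randn(4, 64, 28, 28)   # CNN features of their optical flow
w = torch.sigmoid(torch.rand(4, 1, 1, 1))                 # stand-in learned weights
fused = (w * flow_feat + (1 - w) * rgb_feat).mean(dim=0)  # fused feature map
print(fused.shape)  # torch.Size([64, 28, 28]) -> fed to the base network
```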
We present the Habitat-Matterport 3D Semantics (HM3DSEM) dataset. HM3DSEM is the largest dataset of densely semantically annotated 3D real-world spaces currently available to the academic community. It consists of 142,646 object instance annotations across 216 3D spaces and 3,100 rooms within those spaces. The scale, quality, and diversity of its object annotations far exceed those of prior datasets. A key difference setting HM3DSEM apart from other datasets is its use of texture information to annotate pixel-accurate object boundaries. We demonstrate the effectiveness of the HM3DSEM dataset on the Object Goal Navigation task using different methods: policies trained on HM3DSEM outperform those trained on prior datasets. The introduction of HM3DSEM in the Habitat ObjectNav Challenge led to an increase in participation from 400 submissions in 2021 to 1,022 submissions in 2022.
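A small sketch of how annotations organized as spaces -> rooms -> object instances might be traversed; the record layout and field names here are hypothetical stand-ins for the dataset's actual format, which ships with Habitat.

```python
# Traversing a spaces -> rooms -> object-instances hierarchy to count
# object categories. The structure below is a hypothetical stand-in.
from collections import Counter

spaces = [  # stand-in for the real annotation format
    {"rooms": [{"objects": [{"category": "chair"}, {"category": "table"}]},
               {"objects": [{"category": "chair"}]}]},
]

category_counts = Counter(
    obj["category"]
    for space in spaces
    for room in space["rooms"]
    for obj in room["objects"]
)
print(category_counts.most_common(5))   # most frequent object categories
```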
Autism, also known as Autism Spectrum Disorder (ASD), is a neurological disorder. Its main symptoms include difficulty with (verbal and/or non-verbal) communication and rigid/repetitive behaviors. These symptoms are often indistinguishable from those of typical (control) individuals, so the disorder remains undiagnosed during early childhood, delaying treatment. Since the learning curve is steep during the earliest years, an early diagnosis of autism allows adequate interventions at the right time, which can positively affect the growth of an autistic child. Moreover, traditional methods of autism diagnosis require several visits to a specialized psychiatrist, and this process can be time-consuming. In this paper, we present a learning-based approach to autism diagnosis that uses simple and small video clips of the subject's actions. The task is particularly challenging because the amount of annotated data available is small and the variations between samples of the two classes (ASD and control) are typically indistinguishable. This is also evident from the poor performance of a binary classifier trained with a cross-entropy loss on top of a baseline encoder. To address this, we employ contrastive feature learning in both self-supervised and supervised learning frameworks and show that it leads to a significant increase in the prediction accuracy of the binary classifier on this task. We further validate this through a thorough experimental analysis under different settings on two publicly available datasets.
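As an illustration of the self-supervised branch, below is a minimal NT-Xent contrastive loss of the kind commonly used for such feature learning: embeddings of two augmented views of the same clip are pulled together while all other pairs are pushed apart. The embedding dimension, batch size, and temperature are illustrative, and the paper's exact loss may differ.

```python
# Minimal NT-Xent (SimCLR-style) contrastive loss sketch.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # 2N x D, unit-norm embeddings
    sim = z @ z.T / tau                           # scaled cosine similarities
    n = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    # positive of row i is i+n (and vice versa): the other view of the same clip
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(8, 128)   # view-1 embeddings of 8 video clips (stand-in)
z2 = torch.randn(8, 128)   # view-2 embeddings of the same clips
print(nt_xent(z1, z2))
```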
The boll weevil (Anthonomus grandis L.) is a serious pest that feeds primarily on cotton. Owing to the subtropical climatic conditions in places such as the Lower Rio Grande Valley of Texas, cotton plants can grow year-round, and seeds remaining in the field from the previous season's harvest can continue to grow in rotation crops such as corn (Zea mays L.) and sorghum (Sorghum bicolor L.). These wild or volunteer cotton (VC) plants, once they reach the pinhead squaring stage (5-6 leaf stage), can act as hosts for the boll weevil pest. The Texas Boll Weevil Eradication Program (TBWEP) employs people to locate and eliminate VC plants growing along roads or field margins and in fields of rotation crops, but plants growing in the middle of fields remain undetected. In this paper, we demonstrate the application of YOLOv5-based computer vision (CV) algorithms to detect VC plants growing in the middle of corn fields at three different growth stages (V3, V6, and VT) using Unmanned Aircraft Systems (UAS) remote sensing imagery. All four variants of YOLOv5 (s, m, l, and x) were used and compared in terms of classification accuracy, mean average precision (mAP), and F1 score. YOLOv5s detected VC plants with the maximum classification accuracy of 98% and mAP of 96.3% at the V6 growth stage of corn, whereas YOLOv5s and YOLOv5m yielded the minimum classification accuracy of 85% and the lowest mAP of 86.5% at the VT stage, on images of size 416 x 416 pixels. The developed CV algorithms have the potential to effectively detect and locate VC plants growing in the middle of corn fields and to speed up the management aspects of the TBWEP.
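One preprocessing step implied by the fixed 416 x 416 input size is tiling large UAS images into detector-sized crops; a simple numpy sketch follows. The tile size and stride are illustrative, and edge padding/overlap handling is omitted.

```python
# Tiling a large UAS image into fixed-size crops for the detector.
import numpy as np

def tile_image(img: np.ndarray, size: int = 416, stride: int = 416):
    """Yield (row, col, crop) tiles covering an H x W x C image."""
    H, W = img.shape[:2]
    for r in range(0, max(H - size, 0) + 1, stride):
        for c in range(0, max(W - size, 0) + 1, stride):
            yield r, c, img[r:r + size, c:c + size]

mosaic = np.zeros((2000, 3000, 3), dtype=np.uint8)  # stand-in UAS mosaic
tiles = list(tile_image(mosaic))
print(len(tiles))   # each crop would be fed to the YOLOv5 detector
```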
To control boll weevil (Anthonomus grandis L.) pest re-infestation in cotton fields, the current practice for detecting volunteer cotton (VC) (Gossypium hirsutum L.) plants in rotation crops such as corn (Zea mays L.) and sorghum (Sorghum bicolor L.) involves manual field scouting at field edges. As a result, many VC plants growing in the middle of fields remain undetected and continue to grow alongside the corn and sorghum. Once they reach the pinhead squaring stage (5-6 leaves), they can act as hosts for the boll weevil pest. They therefore need to be detected, located, and precisely spot-sprayed with chemicals. In this paper, we present the application of YOLOv5m to radiometrically and gamma-corrected low-resolution (1.2 megapixel) multispectral imagery to detect and locate VC plants growing in the middle of a corn field at the tasseling (VT) growth stage. Our results show that VC plants can be detected with a mean average precision (mAP) of 79% and a classification accuracy of 78% on images of size 1207 x 923 pixels, at an average inference speed of nearly 47 frames per second (FPS) on an NVIDIA Tesla P100 GPU (16 GB) and 0.4 FPS on an NVIDIA Jetson TX2 GPU. We also demonstrate the application of a customized Unmanned Aircraft System (UAS) built around the developed computer vision (CV) algorithm, and how it can be used for near real-time detection and mitigation of VC plants in corn fields for effective management of the boll weevil pest.
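As a pointer to the preprocessing mentioned above, here is a one-step sketch of per-band gamma correction of a multispectral image in numpy; the gamma value and array shapes are illustrative.

```python
# Per-band gamma correction of a multispectral image (illustrative gamma).
import numpy as np

def gamma_correct(band: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Normalize to [0, 1], apply power-law correction, rescale to 8-bit."""
    x = band.astype(np.float32)
    x = (x - x.min()) / max(x.max() - x.min(), 1e-6)
    return (255 * x ** gamma).astype(np.uint8)

ms = np.random.randint(0, 4096, (923, 1207, 5), dtype=np.uint16)  # 5-band stand-in
corrected = np.dstack([gamma_correct(ms[..., b]) for b in range(ms.shape[-1])])
print(corrected.shape, corrected.dtype)   # (923, 1207, 5) uint8
```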